The Hard-Cut EM Algorithm for Mixture of Sparse Gaussian Processes
Authors
Abstract
The mixture of Gaussian Processes (MGP) is a powerful and rapidly developing machine learning framework. To make its learning more efficient, sparsity constraints have been adopted, yielding the mixture of sparse Gaussian Processes (MSGP). However, existing MGP and MSGP models are rather complicated, and their learning algorithms involve various approximation schemes. In this paper, we refine the MSGP model and develop the hard-cut EM algorithm for MSGP from its original version for MGP. Experiments on both synthetic and real datasets demonstrate that the refined MSGP model and the hard-cut EM algorithm are feasible and can outperform several typical regression algorithms in prediction. Moreover, owing to the sparse approximation, parameter learning of the proposed MSGP model is much more efficient than that of the MGP model.
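To make the hard-cut idea concrete, the following is a minimal Python sketch of a hard-cut EM loop for a mixture of GPs. It is only an illustration under stated assumptions, not the paper's implementation: full scikit-learn GaussianProcessRegressor components stand in for the sparse GPs used in the paper, the hard E-step assigns each point to the component with the highest posterior probability, and the M-step refits each component on its assigned points.

```python
# Minimal sketch of hard-cut EM for a mixture of GPs (not the paper's
# implementation): full scikit-learn GPs stand in for sparse GPs, and
# every component is assumed to keep at least one point per iteration.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def hard_cut_em(X, y, K=2, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.integers(K, size=len(X))        # random initial hard assignments
    pi = np.full(K, 1.0 / K)                # mixing proportions
    for _ in range(n_iter):
        # M-step: refit one GP per component on its assigned points.
        gps = [GaussianProcessRegressor(kernel=RBF(), alpha=1e-2)
                   .fit(X[z == k], y[z == k]) for k in range(K)]
        # Hard E-step: assign each point to its most probable component.
        log_p = np.empty((len(X), K))
        for k, gp in enumerate(gps):
            mu, sd = gp.predict(X, return_std=True)
            log_p[:, k] = np.log(pi[k]) + norm.logpdf(y, mu, np.maximum(sd, 1e-6))
        z = log_p.argmax(axis=1)
        pi = np.bincount(z, minlength=K) / len(X)
    return gps, z, pi
```

The appeal of the hard cut is that each GP is trained only on its own subset of the data, which is exactly where a sparse (inducing-point) approximation would reduce the cost further.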
Similar Resources
Image Segmentation using Gaussian Mixture Model
Abstract: Stochastic models such as mixture models, graphical models, Markov random fields, and hidden Markov models play a key role in probabilistic data analysis. In this paper, we fit a Gaussian mixture model to the pixels of an image. The parameters of the model were estimated by the EM algorithm, and the label corresponding to each pixel of the true image was assigned by Bayes' rule. In fact,...
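As a concrete illustration of the pipeline this abstract describes (EM fitting followed by maximum-posterior pixel labeling), here is a hedged Python sketch: scikit-learn's GaussianMixture performs the EM estimation, and predict implements the Bayes-rule (argmax posterior) labeling.

```python
# Sketch of GMM-based image segmentation: fit a Gaussian mixture to the
# pixel intensities with EM, then label each pixel by its most probable
# component (Bayes rule). `image` is assumed to be a 2-D grayscale array.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment(image, K=3, seed=0):
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=K, random_state=seed).fit(pixels)
    labels = gmm.predict(pixels)            # argmax of posterior probabilities
    return labels.reshape(image.shape)
```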
High-Dimensional Clustering with Sparse Gaussian Mixture Models
We consider the problem of clustering high-dimensional data using Gaussian Mixture Models (GMMs) with unknown covariances. In this context, the Expectation-Maximization (EM) algorithm, which is typically used to learn GMMs, fails to cluster the data accurately due to the large number of free parameters in the covariance matrices. We address this weakness by assuming that the mixture model consis...
The Noisy Expectation-Maximization Algorithm
We present a noise-injected version of the Expectation-Maximization (EM) algorithm: the Noisy Expectation-Maximization (NEM) algorithm. The NEM algorithm uses noise to speed up the convergence of the EM algorithm. The NEM theorem shows that additive noise speeds up the average convergence of the EM algorithm to a local maximum of the likelihood surface if a positivity condition holds. Corollary...
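For intuition, the sketch below runs a generic noise-injected EM loop on a one-dimensional Gaussian mixture: zero-mean noise with a decaying variance is added to the data before each E-step. This conveys only the general flavor of NEM; the positivity condition that the NEM theorem places on the noise is not checked here.

```python
# Generic noise-injected EM for a 1-D Gaussian mixture. The noise is
# annealed (std ~ 1/t); the NEM positivity condition is NOT enforced,
# so this only illustrates the shape of the algorithm.
import numpy as np
from scipy.stats import norm

def noisy_em(y, K=2, n_iter=50, noise_scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, size=K, replace=False)    # initial component means
    sigma = np.full(K, y.std())
    pi = np.full(K, 1.0 / K)
    for t in range(1, n_iter + 1):
        y_t = y + rng.normal(0.0, noise_scale / t, size=y.shape)  # decaying noise
        # E-step on the noise-perturbed data.
        r = pi * norm.pdf(y_t[:, None], mu, sigma)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: standard GMM updates.
        Nk = r.sum(axis=0)
        pi = Nk / len(y)
        mu = (r * y_t[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((r * (y_t[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return pi, mu, sigma
```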
A Mixture Model for Learning Sparse Representations
In a latent variable model, an overcomplete representation is one in which the number of latent variables is at least as large as the dimension of the data observations. Overcomplete representations have been advocated for their robustness in the presence of noise, their ability to be sparse, and their inherent flexibility in modeling the structure of data [9]. In this report, we modify factor analysis...
Journal title:
Volume / Issue:
Pages: -
Publication date: 2015